The vision community has explored numerous pose-guided human editing methods due to their extensive practical applications. Most of these methods still use an image-to-image formulation in which a single image is given as input to produce an edited image as output. However, the problem is ill-defined when the target pose differs significantly from the input pose, and existing methods then resort to in-painting or style transfer to handle occlusions and preserve content. In this paper, we explore the use of multiple views to minimize the problem of missing information and generate an accurate representation of the underlying human model. To fuse knowledge from multiple viewpoints, we design a selector network that takes the pose keypoints and texture from the input images and generates an interpretable per-pixel selection map. The encodings from a separate network (trained on a single-image human reposing task) are then merged in the latent space. This enables us to generate accurate, precise, and visually coherent images for different editing tasks. We demonstrate our network on two newly proposed tasks: multi-view human reposing and mix-and-match human image generation. Additionally, we study the limitations of single-view editing and the scenarios in which multiple views provide a much better alternative.
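To make the fusion step concrete, here is a minimal sketch of how a per-pixel selection map could merge latent encodings from multiple views. The `SelectorNet` module, its shapes, and the softmax weighting are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SelectorNet(nn.Module):
    """Hypothetical selector: maps per-view pose/texture features to an
    interpretable per-pixel, per-view selection map (weights sum to 1)."""
    def __init__(self, in_channels: int, num_views: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels * num_views, 64, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_views, 3, padding=1),
        )

    def forward(self, view_features: torch.Tensor) -> torch.Tensor:
        # view_features: (B, V, C, H, W) -> selection map (B, V, H, W)
        b, v, c, h, w = view_features.shape
        logits = self.conv(view_features.reshape(b, v * c, h, w))
        return torch.softmax(logits, dim=1)

def fuse_views(encodings: torch.Tensor, selection: torch.Tensor) -> torch.Tensor:
    """Merge per-view latent encodings (B, V, C, H, W) with the map (B, V, H, W)."""
    return (encodings * selection.unsqueeze(2)).sum(dim=1)  # (B, C, H, W)
```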
We introduce a new method for diverse foreground generation with explicit control over various factors. Existing image-inpainting-based foreground generation methods often struggle to generate diverse results and rarely allow users to explicitly control specific factors of variation (e.g., varying the facial identity or expression for face inpainting results). We leverage contrastive learning with latent codes to generate diverse foreground results for the same masked input. Specifically, we define two sets of latent codes, where one controls a pre-defined factor ("known"), and the other controls the remaining factors ("unknown"). The sampled latent codes from the two sets jointly bi-modulate the convolution kernels to guide the generator to synthesize diverse results. Experiments demonstrate the superiority of our method over the state of the art in result diversity and generation controllability.
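A minimal sketch of the bi-modulation idea, assuming StyleGAN2-style weight modulation with demodulation; the two mapping layers and the elementwise combination of the codes are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiModulatedConv2d(nn.Module):
    """Convolution whose kernels are jointly modulated by a 'known' code
    (pre-defined factor) and an 'unknown' code (remaining factors)."""
    def __init__(self, in_ch: int, out_ch: int, k: int, code_dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k))
        self.scale_known = nn.Linear(code_dim, in_ch)
        self.scale_unknown = nn.Linear(code_dim, in_ch)

    def forward(self, x, z_known, z_unknown):
        s = self.scale_known(z_known) * self.scale_unknown(z_unknown)   # (B, in_ch)
        b = x.shape[0]
        w = self.weight.unsqueeze(0) * s.view(b, 1, -1, 1, 1)           # (B, out, in, k, k)
        w = w / (w.pow(2).sum(dim=(2, 3, 4), keepdim=True).sqrt() + 1e-8)  # demodulate
        out_ch, in_ch, k, _ = self.weight.shape
        # Grouped-conv trick: one group per sample for per-sample kernels.
        x = x.reshape(1, b * in_ch, *x.shape[2:])
        out = F.conv2d(x, w.reshape(b * out_ch, in_ch, k, k), padding=k // 2, groups=b)
        return out.reshape(b, out_ch, *out.shape[2:])
```

Resampling `z_unknown` while holding `z_known` fixed would then vary only the uncontrolled factors, which is the diversity behaviour the abstract describes.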
Unmanned Aerial Vehicle (UAV)-based remote sensing systems combined with computer vision have the potential to assist building construction and disaster management, such as damage assessment during earthquakes. A building's vulnerability to earthquakes can be assessed through inspections that take into account the expected damage progression of the associated components and the contribution of the components to the performance of the structural system. Most of these inspections are conducted manually, leading to high utilization of manpower, time, and cost. This paper proposes a methodology to automate these inspections through UAV-based image data collection and a software library for post-processing that helps estimate seismic structural parameters. The key parameters considered here are the distances between adjacent buildings, building plan shape, building plan area, objects on the rooftop, and rooftop layout. The accuracy of the proposed methodology in estimating the above-mentioned parameters is verified through field measurements taken using distance-measuring sensors, as well as data obtained through Google Earth. Additional details and code can be accessed from https://uvrsabi.github.io/.
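As one worked example, the building plan area can be estimated from a segmented rooftop mask and the image's ground sampling distance. The function below is a sketch under that assumption; it is not the paper's released library.

```python
import numpy as np

def estimate_plan_area(roof_mask: np.ndarray, gsd_m_per_px: float) -> float:
    """Estimate building plan area (m^2) from a binary rooftop mask.

    roof_mask: HxW array, nonzero where pixels belong to the rooftop.
    gsd_m_per_px: ground sampling distance in metres per pixel, derived
                  from flight altitude and camera intrinsics (assumed known).
    """
    roof_pixels = np.count_nonzero(roof_mask)
    return roof_pixels * gsd_m_per_px ** 2  # each pixel covers gsd^2 square metres

# Example: a 500x400-pixel roof at 2 cm/pixel -> 200000 * 0.0004 = 80 m^2.
mask = np.ones((400, 500), dtype=np.uint8)
print(estimate_plan_area(mask, 0.02))
```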
Many commodity sensors that measure the states of the robot and dynamic obstacles have non-Gaussian noise characteristics. Yet, many current approaches treat the underlying uncertainty in motion and perception as Gaussian, primarily to ensure computational tractability. On the other hand, existing planners that do work with non-Gaussian uncertainty shed no light on leveraging distributional characteristics of the motion and perception noise, such as bias, for efficient collision avoidance. This paper fills this gap by interpreting reactive collision avoidance as a distribution matching problem between the collision constraint violations and the Dirac delta distribution. To ensure fast reactivity of the planner, we embed each distribution in a Reproducing Kernel Hilbert Space and reformulate distribution matching as minimizing the Maximum Mean Discrepancy (MMD) between the two distributions. We show that evaluating the MMD for a given control input boils down to just matrix-matrix products. We leverage this insight to develop a simple control sampling approach for avoiding dynamic and uncertain obstacles. We advance the state of the art in two respects. First, we conduct an extensive empirical study showing that our planner can infer distributional bias from sample-level information and use that insight to guide the robot towards a good homotopy. We also highlight how a Gaussian approximation of the underlying uncertainty can lose the bias estimates and steer the robot towards unfavorable states with a high collision probability. Second, we show tangible comparative advantages of the proposed distribution matching approach over previous non-parametric and Gaussian-approximated methods for reactive collision avoidance.
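A minimal sketch of the reduction: with a kernel k, the (biased) squared MMD between constraint-violation samples and samples of the Dirac delta at zero is a sum of Gram-matrix entries. The RBF kernel, its bandwidth, and the sampling scheme here are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def rbf_gram(a: np.ndarray, b: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    """(n, d) x (m, d) -> (n, m) RBF kernel matrix."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x: np.ndarray, y: np.ndarray) -> float:
    """Biased squared-MMD estimate: just Gram-matrix sums/means."""
    return rbf_gram(x, x).mean() + rbf_gram(y, y).mean() - 2 * rbf_gram(x, y).mean()

def score_controls(violation_samples_per_control, num_delta: int = 64):
    """Score each sampled control by how closely its distribution of
    collision-constraint violations matches the Dirac delta at 0;
    lower MMD means safer. Each entry is an (n, 1) sample array."""
    delta = np.zeros((num_delta, 1))
    return [mmd2(v, delta) for v in violation_samples_per_control]
```

Picking the control with the smallest score is the "simple control sampling" idea: the violation samples for a good control concentrate at zero, collapsing the MMD.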
Existing GAN inversion and editing methods work well for aligned objects with a clean background, such as portraits and animal faces, but often struggle for more difficult categories with complex scene layouts and object occlusions, such as cars, animals, and outdoor images. We propose a new method to invert and edit such complex images in the latent space of GANs such as StyleGAN2. Our key idea is to explore inversion with a collection of layers, adapting the inversion process to the difficulty of the image. We learn to predict the "invertibility" of different image segments and project each segment into a latent layer. Easier regions can be inverted into an earlier layer of the generator's latent space, while more challenging regions can be inverted into a later feature space. Experiments show that our method obtains better inversion results on complex categories compared to state-of-the-art methods, while maintaining downstream editability. Please refer to our project page at https://www.cs.cmu.edu/~saminversion.
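A sketch of the routing step, assigning each segment to an inversion space by its predicted invertibility; the score model, thresholds, and space names below are hypothetical placeholders for the learned components.

```python
def assign_layers(segments, predict_invertibility, thresholds=(0.75, 0.5)):
    """Route each image segment to an inversion space by difficulty.

    segments: dict of segment id -> binary mask.
    predict_invertibility: callable returning a score in [0, 1]
                           (stands in for the learned predictor).
    """
    assignments = {}
    for seg_id, seg_mask in segments.items():
        score = predict_invertibility(seg_mask)
        if score >= thresholds[0]:
            assignments[seg_id] = "w_plus"        # early latent space: most editable
        elif score >= thresholds[1]:
            assignments[seg_id] = "feature_mid"   # intermediate feature layer
        else:
            assignments[seg_id] = "feature_late"  # most expressive, least editable
    # Each segment is then optimized only within its assigned space.
    return assignments
```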
We consider the problem of an agent/robot with non-holonomic kinematics avoiding many dynamic obstacles. The state and velocity noise of both the robot and the obstacles, as well as the robot's control noise, are modeled as non-parametric distributions, since the Gaussian assumption on noise models is violated in real-world scenarios. Under these assumptions, we formulate a robust MPC that efficiently samples robot controls so as to drive the robot towards the goal state while avoiding obstacles under the duress of such non-parametric noise. In particular, the MPC incorporates a distribution matching cost that effectively aligns the distribution of the current collision cone with a desired distribution whose samples are collision-free. This cost is posed as a distance function in a Hilbert space, whose minimization typically results in the collision cone samples becoming collision-free. We compare against, and show tangible performance gains over, methods that rely on a Gaussian approximation of the original non-parametric state and obstacle distributions with subsequent linearization. We also show superior performance over methods that pose a Gaussian approximation of the non-parametric noise without subsequent linearization. The performance gains are shown both in terms of trajectory length and control costs, attesting to the efficacy of the proposed method. To the best of our knowledge, this is the first presentation of reactive collision avoidance of moving obstacles in the presence of non-parametric state, velocity, and actuator noise models.
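For concreteness, here is a sketch of the per-sample collision-cone constraint whose violations the matching cost drives towards zero. It follows the standard velocity-obstacle form (omitting the approach-direction check for brevity); the noise sampling is an illustrative assumption, as the paper draws samples non-parametrically from sensor data rather than from a fitted Gaussian.

```python
import numpy as np

def collision_cone_violation(rel_pos, rel_vel, combined_radius):
    """Velocity-obstacle style constraint. Positive values mean the sampled
    relative velocity points into the collision cone (a violation).

    rel_pos, rel_vel: (n, 2) arrays of noise samples (robot minus obstacle).
    """
    r_dot_v = (rel_pos * rel_vel).sum(axis=1)
    dist2 = (rel_pos ** 2).sum(axis=1) - combined_radius ** 2
    speed2 = (rel_vel ** 2).sum(axis=1)
    return np.maximum(0.0, r_dot_v ** 2 - speed2 * dist2)

# Toy usage with Gaussian draws standing in for non-parametric samples.
rng = np.random.default_rng(0)
violations = collision_cone_violation(rng.normal(2.0, 0.3, (128, 2)),
                                      rng.normal(-1.0, 0.2, (128, 2)), 0.5)
```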
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework which supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are available publicly on the NL-Augmenter repository (https://github.com/gem-benchmark/nl-augmenter).
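A sketch of what a contributed transformation might look like. The `SentenceOperation` base class here is a local stand-in mirroring the repository's documented pattern; treat the exact interface and import path in the real framework as assumptions to be checked against the repo.

```python
from typing import List

class SentenceOperation:
    """Stand-in for NL-Augmenter's transformation interface (the real base
    class lives in the repository; this mirror keeps the sketch runnable)."""
    def generate(self, sentence: str) -> List[str]:
        raise NotImplementedError

class AdjacentCharSwap(SentenceOperation):
    """Toy transformation: swap adjacent characters to simulate typos,
    in the spirit of the framework's perturbation-style transformations."""
    def generate(self, sentence: str) -> List[str]:
        chars = list(sentence)
        for i in range(0, len(chars) - 1, 7):  # perturb every 7th position
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return ["".join(chars)]

print(AdjacentCharSwap().generate("Data augmentation improves robustness."))
```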
Synthesizing the dynamic appearance of humans in motion plays a central role in applications such as AR/VR and video editing. While many recent methods have been proposed to tackle this problem, handling loose garments with complex textures and highly dynamic motion still remains challenging. In this paper, we propose a video-based appearance synthesis method that tackles such challenges and demonstrates high-quality results for in-the-wild videos that have not been shown before. Specifically, we adopt a StyleGAN-based architecture for the task of person-specific, video-based motion retargeting. We introduce a novel motion signature that is used to modulate the generator weights to capture dynamic appearance changes, and regularize the frame-based pose estimates to improve temporal coherency. We evaluate our method on a set of challenging videos and show that our approach achieves state-of-the-art performance both qualitatively and quantitatively.
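A minimal sketch of turning a short pose window into per-channel scales that modulate a generator layer's weights; the windowed encoder, sigmoid scaling, and shapes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class MotionModulation(nn.Module):
    """Map a window of 2D pose keypoints to a motion signature that scales
    the output channels of one generator layer (sketch only)."""
    def __init__(self, num_joints: int, window: int, out_channels: int):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Flatten(),                                  # (B, window*num_joints*2)
            nn.Linear(window * num_joints * 2, out_channels),
            nn.Sigmoid(),                                  # scales in (0, 1)
        )

    def forward(self, pose_window: torch.Tensor, layer_weight: torch.Tensor):
        # pose_window: (B, window, num_joints, 2); layer_weight: (out, in, k, k)
        s = self.encode(pose_window)                       # (B, out) motion signature
        return layer_weight.unsqueeze(0) * s.view(s.shape[0], -1, 1, 1, 1)
```

Feeding a window rather than a single frame is what lets the signature react to motion (e.g., garment swing) instead of static pose alone.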
Real-world classification problems typically exhibit an imbalanced or long-tailed label distribution, wherein many labels are associated with only a few samples. This poses a challenge for generalisation on such labels, and also makes naïve learning biased towards dominant labels. In this paper, we present two simple modifications of standard softmax cross-entropy training to cope with these challenges. Our techniques revisit the classic idea of logit adjustment based on the label frequencies, either applied post-hoc to a trained model or enforced in the loss during training. Such adjustment encourages a large relative margin between the logits of rare versus dominant labels. These techniques unify and generalise several recent proposals in the literature, while possessing firmer statistical grounding and empirical performance. A reference implementation of our methods is available at https://github.com/google-research/google-research/tree/master/logit_adjustment.
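A sketch of both variants in PyTorch, following the abstract's description: the post-hoc form subtracts scaled log-priors from a trained model's logits, and the loss form adds them inside cross-entropy; the scaling τ=1 and the toy data are assumed defaults.

```python
import torch
import torch.nn.functional as F

def class_priors(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts / counts.sum()

def posthoc_adjust(logits: torch.Tensor, priors: torch.Tensor, tau: float = 1.0):
    """Post-hoc variant: rank classes by f_y(x) - tau * log(pi_y)."""
    return logits - tau * torch.log(priors + 1e-12)

def logit_adjusted_loss(logits, labels, priors, tau: float = 1.0):
    """Training variant: cross-entropy on logits shifted by tau * log(pi_y),
    which enlarges the relative margin for rare classes."""
    return F.cross_entropy(logits + tau * torch.log(priors + 1e-12), labels)

# Toy usage on a long-tailed label set (assumed data).
labels = torch.tensor([0] * 90 + [1] * 9 + [2])
priors = class_priors(labels, num_classes=3)
loss = logit_adjusted_loss(torch.randn(100, 3), labels, priors)
```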
We introduce Argoverse 2 (AV2), a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras and two stereo cameras, in addition to lidar point clouds and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with predicting the future motion of "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario carries its own HD map with 3D lane and crosswalk geometry, sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.
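As a trivial reference point for the forecasting task, a constant-velocity baseline that extrapolates a scored actor's track history; the array layout and 10 Hz assumption are mine, not the AV2 API.

```python
import numpy as np

def constant_velocity_forecast(history_xy: np.ndarray, num_future: int,
                               dt: float = 0.1) -> np.ndarray:
    """Extrapolate future positions from the last observed velocity.

    history_xy: (T, 2) observed positions of one scored actor (assumed layout).
    Returns (num_future, 2) predicted positions.
    """
    vel = (history_xy[-1] - history_xy[-2]) / dt        # last-step velocity
    steps = np.arange(1, num_future + 1)[:, None] * dt
    return history_xy[-1] + steps * vel

# Toy usage: an actor moving at 1 m/s along x, sampled at 10 Hz.
history = np.stack([np.linspace(0.0, 4.9, 50), np.zeros(50)], axis=1)
future = constant_velocity_forecast(history, num_future=60)  # 6 s horizon
```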